Power Aware Hybrid Proxy Cache-Prefetch Model using Combined Energy Conservation Policies
Authors
Abstract
The World Wide Web (WWW) is growing exponentially in the number of users and Web applications. Owing to enormous network traffic and factors such as bandwidth availability, request processing time at the server, round-trip time, and object size, Web latency continues to increase. The tight integration of Web prefetching and caching deployed at the proxy server, supported by Web log mining, is an attractive and effective way to reduce this latency and improve Web quality of service. To provide uninterrupted service to Web users, Web servers in data centres operate continuously in 24x7 mode. Low power consumption and efficient energy management — spanning individual hardware components, system software, and network applications — is a critical and urgent issue in today's eco-conscious computing world. In this paper, we incorporate an energy-efficient, feedback-driven control framework together with a sleep-proxy mechanism into our Hybrid Cache-Prefetch System Model. The proposed model periodically and dynamically monitors proxy server performance parameters. We also develop an optimized load allocation algorithm for a cluster of proxy servers under the proposed combined strategy, along with guidelines for energy-efficient design toward green IT. Although we emphasize saving as much power as possible, proxy server performance is preserved without sacrificing the client experience. Experimental results show that the proposed scheme achieves 28% lower power consumption and an 18% improvement in power efficiency.
Keywords: Proxy Cache-Prefetch model, Power and Energy Estimation, Feedback Driven Control Framework, Sleep Proxy, Optimal Load Allocation.
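The abstract describes a feedback-driven control loop that samples proxy utilization and parks idle servers behind a sleep proxy. A minimal sketch of such a loop is given below; the thresholds, server names, and wake/sleep policy are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative feedback-driven power control for a proxy cluster.
# Thresholds (low/high) and the one-server-at-a-time policy are
# assumptions made for this sketch only.
from dataclasses import dataclass

@dataclass
class ProxyServer:
    name: str
    load: float = 0.0     # sampled utilization in [0, 1]
    asleep: bool = False  # True when parked behind the sleep proxy

def feedback_step(cluster, low=0.2, high=0.8):
    """One control-loop iteration over periodically sampled loads:
    wake a server when the cluster runs hot, park the least-loaded
    server when it runs cold."""
    awake = [s for s in cluster if not s.asleep]
    avg = sum(s.load for s in awake) / len(awake) if awake else 1.0
    if avg > high:
        for s in cluster:
            if s.asleep:          # wake one sleeping server
                s.asleep = False
                break
    elif avg < low and len(awake) > 1:
        # park the least-loaded awake server to save power
        min(awake, key=lambda s: s.load).asleep = True
    return avg

cluster = [ProxyServer("p1", 0.90), ProxyServer("p2", 0.85),
           ProxyServer("p3", 0.00, asleep=True)]
feedback_step(cluster)  # cluster is hot, so p3 is woken up
```

A real sleep proxy would additionally answer ARP/keep-alive traffic on behalf of the sleeping machine; that part is omitted here.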
Similar Articles
Energy Efficient Proxy Prefetch-Cache Framework in Clustered Architecture
The dynamic nature and explosive growth of the World Wide Web (WWW) pose a major challenge in delivering up-to-date information and documents to Web clients with widely varying interests around the globe. Due to enormous network traffic and various constraints such as limited bandwidth, processing time at the server, round-trip delay, etc., the Web latency ...
Web Latency Minimization Using Hybrid Proxy Prefetch-cache Framework with Power and Energy Efficiency
The exponential growth of the World Wide Web (WWW) in size, processing power, and software sophistication produces enormous traffic in the widely distributed network; combined with performance-diminishing factors such as bandwidth availability, request processing time at heavily loaded Web servers, round-trip time, and Web object size, the Web latency is continuously incre...
Adaptive Power-Aware Cache Management for Mobile Computing Systems
Prefetching can reduce query latency and improve the bandwidth utilization of cache invalidation schemes. However, prefetching consumes power. In this paper, we propose a power-aware cache management scheme to address this issue. Based on a novel prefetch-access ratio concept, the proposed scheme can dynamically optimize performance or power based on the available resources and performance re...
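The snippet above mentions a prefetch-access ratio used to trade performance for power. A hedged sketch of one plausible interpretation follows; the cutoff value, the counters, and the `battery_low` flag are assumptions for illustration, not the scheme's published parameters.

```python
# Illustrative prefetch-access-ratio (PAR) policy: if past prefetches
# were rarely followed by hits and power is scarce, stop prefetching.
# The cutoff of 2.0 prefetches per hit is an arbitrary example value.
class ParPolicy:
    def __init__(self, cutoff=2.0):
        self.cutoff = cutoff  # max prefetches tolerated per cache hit
        self.prefetches = 0
        self.hits = 0

    def record_prefetch(self):
        self.prefetches += 1

    def record_hit(self):
        self.hits += 1

    def should_prefetch(self, battery_low):
        """Favor performance when power is plentiful; skip prefetching
        when power is low and the PAR shows prefetches go unused."""
        par = self.prefetches / max(self.hits, 1)
        return not (battery_low and par > self.cutoff)
```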
Power-Aware Prefetch in Mobile Environments
Most of the prefetch techniques used in the current cache management schemes do not consider the power constraints of the mobile clients and other factors such as the size of the data items, the data access rate, and the data update rate. In this paper, we address these issues by proposing a power-aware prefetch scheme, called value-based adaptive prefetch (VAP) scheme. The VAP scheme defines a...
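The VAP snippet above says prefetch decisions should weigh item size, access rate, and update rate. The sketch below is a hypothetical value function in that spirit; the exact formula and the greedy budget selection are assumptions, since the abstract does not give them.

```python
# Hypothetical "value" of prefetching an item: frequently accessed,
# rarely updated, small items score highest. The formula is an
# assumption for illustration only.
def prefetch_value(access_rate, update_rate, size_bytes):
    if size_bytes <= 0:
        raise ValueError("size must be positive")
    return access_rate / ((1.0 + update_rate) * size_bytes)

def pick_prefetch(items, budget_bytes):
    """Greedily choose the highest-value items within a size budget
    (standing in for a power/bandwidth budget)."""
    chosen, used = [], 0
    ranked = sorted(items, key=lambda it: prefetch_value(*it[1:]),
                    reverse=True)
    for name, _a, _u, size in ranked:
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
    return chosen
```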
A Performance Study of Instruction Cache Prefetching Methods
Prefetching methods for instruction caches are studied via trace-driven simulation. The two primary methods are "fall-through" prefetch (sometimes referred to as "one-block lookahead") and "target" prefetch. Fall-through prefetches cover sequential line accesses, and a key parameter is the distance from the end of the current line at which the prefetch for the next line is initiated. Target prefe...
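Fall-through (one-block-lookahead) prefetching, as described above, can be demonstrated with a tiny trace-driven simulation. The line size, unbounded cache, and toy trace below are assumptions for illustration; target prefetch (which requires branch-target information) is not modeled.

```python
# Minimal trace-driven sketch of fall-through instruction prefetching:
# on every access, also bring in the sequentially next cache line.
# A 16-byte line and an unbounded cache are simplifying assumptions.
LINE = 16  # bytes per cache line (illustrative)

def simulate(trace, fallthrough=True):
    """Count misses over a trace of instruction addresses."""
    cache, misses = set(), 0
    for addr in trace:
        line = addr // LINE
        if line not in cache:
            misses += 1
            cache.add(line)
        if fallthrough:
            cache.add(line + 1)  # one-block-lookahead prefetch
    return misses
```

On a purely sequential trace, fall-through prefetch removes all but the first miss, which is why the prefetch-initiation distance matters mainly for timeliness, not coverage.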